AI Voice Scams Are Surging — Here’s How to Protect Yourself

AI voice-cloning scams pose growing threat: Starling Bank warns that millions could fall victim to fraudsters using artificial intelligence to replicate voices and deceive people into sending money.

  • The UK-based online bank reports that scammers can clone a person’s voice from just three seconds of audio found online, such as in social media videos.
  • Fraudsters then use the cloned voice to impersonate that person, contacting their friends or family members to ask for money under false pretenses.

Survey reveals alarming trends: A recent study conducted by Starling Bank and Mortar Research highlights the prevalence and potential impact of AI voice-cloning scams.

  • Over a quarter of respondents reported being targeted by such scams in the past year.
  • 46% of those surveyed were unaware that these scams existed.
  • 8% of respondents admitted they would send money if requested by a friend or family member, even if the call seemed suspicious.

Cybersecurity expert sounds alarm: Lisa Grahame, chief information security officer at Starling Bank, emphasizes the need for increased awareness and caution.

  • Grahame points out that people often post content online containing their voice without realizing it could make them vulnerable to fraudsters.
  • The bank recommends establishing a “safe phrase” with loved ones to verify identity during phone calls.

Safeguarding against voice-cloning scams: Starling Bank offers advice on how to protect oneself from these sophisticated frauds.

  • The recommended “safe phrase” should be simple, random, and easy to remember, but different from other passwords.
  • Sharing the safe phrase via text is discouraged, but if necessary, the message should be deleted once received.

AI advancements raise concerns: The increasing sophistication of AI in mimicking human voices has sparked worries about potential misuse.

  • There are growing fears about AI’s ability to help criminals access bank accounts and spread misinformation.
  • OpenAI, the creator of ChatGPT, has developed a voice replication tool called Voice Engine but has not made it publicly available due to concerns about synthetic voice misuse.

Broader implications for AI security: The rise of AI voice-cloning scams underscores the need for enhanced cybersecurity measures and public awareness.

  • As AI technology continues to advance, it’s likely that new forms of fraud and deception will emerge, requiring ongoing vigilance from both individuals and institutions.
  • The situation highlights the importance of responsible AI development and deployment, balancing innovation with safeguards against potential misuse.
